
Kinect posture reconstruction based on a local mixture of Gaussian process models


Abstract

Depth-sensor-based 3D human motion estimation hardware such as the Kinect has recently made interactive applications more popular. However, it remains challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live-captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from the Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of the Gaussian Process is its cubic learning complexity when dealing with a large database, due to the inversion of the covariance matrix. To solve this problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Because the sample size in each local Gaussian Process is significantly smaller, the learning time is greatly reduced. At the same time, prediction is faster, as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows a specific local Gaussian Process to be incrementally updated in real time, which improves its ability to adapt to run-time postures that differ from those in the database. Experimental results demonstrate that our system can generate high-quality postures even under severe self-occlusion, which is beneficial for real-time applications such as motion-based gaming and sports training.
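The core computational idea described above (partition the training data into local regions, fit one Gaussian Process per region, and predict a query as a weighted mean of only the nearby local models) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the clustering scheme, kernel, distance weighting, and all names (LocalGP, fit_local_mixture, predict_mixture) are illustrative, and the joint reliability and temporal consistency terms of the full optimization framework are omitted.

```python
# Minimal sketch of a "local mixture of Gaussian Processes": the training data
# are split into local regions (here by a simple k-means on the inputs), one GP
# is fitted per region, and a query is predicted as a distance-weighted mean of
# the few nearest local GPs only. Names and choices here are assumptions, not
# the paper's method.
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

class LocalGP:
    """One Gaussian Process regressor trained on a local subset of the data."""
    def __init__(self, X, Y, noise=1e-2):
        self.X, self.center = X, X.mean(axis=0)
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, Y)          # cached (K + noise*I)^-1 Y

    def predict(self, x):
        return rbf_kernel(x[None, :], self.X) @ self.alpha  # posterior mean

def fit_local_mixture(X, Y, n_clusters=8, iters=20, seed=0):
    """Cluster the inputs and fit a small GP per non-empty cluster."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return [LocalGP(X[labels == k], Y[labels == k])
            for k in range(n_clusters) if np.any(labels == k)]

def predict_mixture(models, x, n_nearby=3):
    """Weighted mean prediction from the local GPs closest to the query."""
    dists = np.array([np.linalg.norm(x - m.center) for m in models])
    nearby = np.argsort(dists)[:n_nearby]
    w = 1.0 / (dists[nearby] + 1e-8)
    w /= w.sum()
    return sum(wi * models[i].predict(x)[0] for wi, i in zip(w, nearby))

if __name__ == "__main__":
    # Toy stand-in: map 6-D "noisy" inputs to 6-D "clean" outputs.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 6))
    Y = np.sin(X) + 0.05 * rng.normal(size=X.shape)
    models = fit_local_mixture(X, Y)
    print(predict_mixture(models, X[0]))
```

Because each local GP is trained only on its own cluster, the cubic cost of the covariance inversion applies to the much smaller local sample size, which is the source of the learning and prediction speed-ups the abstract describes.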
